Large language models (LLMs) have demonstrated excellent zero-shot generalization to new language tasks. However, effective utilization of LLMs for zero-shot visual question answering (VQA) remains challenging, primarily due to the modality disconnection and task disconnection between LLMs and the VQA task. End-to-end training on vision and language data may bridge these disconnections, but is inflexible and computationally expensive. To address this issue, we propose \emph{Img2Prompt}, a plug-and-play module that provides prompts bridging the aforementioned modality and task disconnections, so that LLMs can perform zero-shot VQA without end-to-end training. To provide such prompts, we employ LLM-agnostic models to produce image descriptions and self-constructed question-answer pairs, which effectively guide the LLM to perform zero-shot VQA. Img2Prompt offers the following benefits: 1) It can flexibly work with various LLMs to perform VQA. 2) Without the need for end-to-end training, it significantly reduces the cost of deploying LLMs for zero-shot VQA. 3) It achieves comparable or better performance than methods that rely on end-to-end training. For example, we outperform Flamingo~\cite{Deepmind:Flamingo2022} by 5.6\% on VQAv2. On the challenging A-OKVQA dataset, our method even outperforms few-shot methods by as much as 20\%.
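The prompting mechanism described above can be illustrated with a minimal sketch, assuming an off-the-shelf captioner and a frozen LLM are available; the helper names below (build_vqa_prompt, llm_generate) are hypothetical placeholders rather than the paper's actual interfaces.

# Minimal sketch of caption-and-exemplar prompting for zero-shot VQA.
# All names are illustrative assumptions, not Img2Prompt's actual code.

def build_vqa_prompt(captions, exemplar_qa_pairs, question):
    """Assemble an LLM prompt from image descriptions and synthetic QA exemplars."""
    context = "Contexts: " + " ".join(captions) + "\n"
    exemplars = "".join(f"Question: {q} Answer: {a}\n" for q, a in exemplar_qa_pairs)
    return context + exemplars + f"Question: {question} Answer:"

# The captions come from an off-the-shelf captioning model, and the QA exemplars
# are generated from those captions, so the LLM itself needs no VQA-specific training.
captions = ["A dog is catching a red frisbee in a park."]
qa_pairs = [("What is the dog catching?", "a frisbee"),
            ("What color is the frisbee?", "red")]
prompt = build_vqa_prompt(captions, qa_pairs, "Where is the dog playing?")
# answer = llm_generate(prompt)   # any frozen LLM completes the answer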
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities show a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
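The component-wise assembly can be sketched in plain PyTorch; the class below is a conceptual illustration under assumed names and is not TencentPretrain's actual API (the toolkit configures its components through its own interfaces).

import torch.nn as nn

# Conceptual sketch of assembling a pre-training model from interchangeable parts.
class ModularPretrainModel(nn.Module):
    def __init__(self, embedding, encoder, target):
        super().__init__()
        self.embedding = embedding   # e.g., word + position embeddings
        self.encoder = encoder       # e.g., a Transformer or an LSTM encoder
        self.target = target         # e.g., a masked-LM or classification head

    def forward(self, tokens, labels):
        hidden = self.encoder(self.embedding(tokens))
        return self.target(hidden, labels)   # pre-training loss

Swapping a single component, say replacing the encoder while keeping the embedding and target, yields a different pre-training model without touching the rest of the pipeline, which is the reuse pattern the modular design aims at.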
Spiking neural networks (SNNs) are biologically inspired models with the advantages of high computational capability and low power consumption. Training deep SNNs, however, remains an open problem, which limits their real-world applications. Here, we propose a deep SNN architecture named Spiking SiamFC++ for object tracking with end-to-end direct training. Specifically, the AlexNet backbone is extended in the temporal domain to extract features, and a surrogate gradient function is adopted to enable direct supervised training of the deep SNN. To examine the performance of Spiking SiamFC++, several tracking benchmarks are considered, including OTB2013, OTB2015, VOT2015, VOT2016, and UAV123. We find that the precision loss is small compared with the original SiamFC++. Compared with existing SNN-based target trackers such as SiamSNN, the proposed Spiking SiamFC++ reaches a precision (success) of 85.24% (64.37%), far higher than the 52.78% (44.32%) of SiamSNN. To the best of our knowledge, Spiking SiamFC++ outperforms the existing state-of-the-art approaches in SNN-based object tracking, providing a new path for SNN applications in the field of target tracking. This work may further promote the development of SNN algorithms and neuromorphic chips.
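The key direct-training ingredient, a surrogate gradient for the non-differentiable spike, can be sketched as follows; the rectangular surrogate and the threshold value are assumptions for illustration, not necessarily the exact choices of the paper.

import torch

THRESHOLD = 1.0  # firing threshold (assumed value)

class SpikeFunction(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential >= THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # The true spike derivative is zero almost everywhere; replace it with a
        # rectangular window around the threshold so gradients can flow.
        return grad_output * (torch.abs(v - THRESHOLD) < 0.5).float()

spike = SpikeFunction.apply

# Membrane potentials from a convolutional layer are thresholded into spikes,
# yet the layer remains trainable end-to-end with standard backpropagation.
v = torch.randn(4, requires_grad=True)
spike(v).sum().backward()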
The adaptation of generative adversarial networks (GANs) aims to transfer a pre-trained GAN to a target domain with limited training data. In this paper, we focus on the one-shot case, which is more challenging and rarely explored in previous works. We consider that the adaptation from a source domain to a target domain can be decoupled into two parts: the transfer of global style, such as texture and color, and the emergence of new entities that do not belong to the source domain. While previous works mainly focus on style transfer, we propose a novel and concise framework\footnote{\url{https://github.com/thevoidname/generalized-one-shot-gan-adaption}} to address the \textit{generalized one-shot adaptation} task for both style and entity transfer, in which a reference image and its binary entity mask are provided. Our core objective is to constrain the gap between the internal distributions of the reference and synthesized images via the sliced Wasserstein distance. To better achieve this, style fixation is first used to roughly obtain the exemplary style, and an auxiliary network is introduced into the original generator to disentangle entity and style transfer. Furthermore, to realize cross-domain correspondence, we propose a variational Laplacian regularization to constrain the smoothness of the adapted generator. Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios.
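The core objective, a sliced Wasserstein distance between internal feature distributions, can be written down compactly; the number of projections and the assumption of equally sized feature sets below are illustrative simplifications.

import torch

def sliced_wasserstein_distance(x, y, num_projections=128):
    """Approximate SWD between two point sets of shape (n, d); assumes equal n."""
    d = x.shape[1]
    directions = torch.randn(d, num_projections, device=x.device)
    directions = directions / directions.norm(dim=0, keepdim=True)
    proj_x = (x @ directions).sort(dim=0).values   # sorted 1-D projections
    proj_y = (y @ directions).sort(dim=0).values
    # 1-D optimal transport reduces to matching sorted samples.
    return ((proj_x - proj_y) ** 2).mean()

# x and y would hold internal generator features of the reference image and a
# synthesized image; minimizing the SWD pulls their distributions together.
x, y = torch.randn(256, 64), torch.randn(256, 64)
loss = sliced_wasserstein_distance(x, y)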
Fast global aggregation of effective distributed parameters is crucial for federated learning (FL), which requires adequate bandwidth for parameter communication and sufficient user data for local training. Otherwise, FL may spend excessive training time to converge and produce inaccurate models. In this paper, we propose a brand-new FL framework, PromptFL, that replaces federated model training with federated prompt training, i.e., it lets federated participants train prompts instead of a shared model, in order to simultaneously achieve efficient global aggregation and local training on insufficient data by exploiting the power of foundation models (FMs) in a distributed manner. PromptFL ships an off-the-shelf FM (i.e., CLIP) to distributed clients, which cooperatively train shared soft prompts based on very little local data. Since PromptFL only needs to update the prompts rather than the whole model, both local training and global aggregation can be greatly accelerated. Moreover, an FM trained on large-scale data can provide strong adaptation capability for distributed users' tasks through the trained soft prompts. We empirically analyze PromptFL via extensive experiments and demonstrate its superiority in terms of system feasibility, user privacy, and performance.
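The prompt-only federation can be sketched as follows; the prompt shape, the loss interface of the frozen backbone, and all function names are assumptions for illustration, not PromptFL's actual implementation.

import torch

# Conceptual sketch of federated soft-prompt training with a frozen backbone.

def local_prompt_update(global_prompt, local_loader, frozen_model, lr=1e-3, epochs=1):
    """Each client optimizes only the soft prompt against a frozen CLIP-style backbone."""
    prompt = global_prompt.clone().requires_grad_(True)
    optimizer = torch.optim.SGD([prompt], lr=lr)
    for _ in range(epochs):
        for images, labels in local_loader:
            loss = frozen_model(images, prompt, labels)   # backbone weights stay frozen
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return prompt.detach()

def server_aggregate(client_prompts):
    # FedAvg over prompt tensors only: kilobytes per round instead of a full model.
    return torch.stack(client_prompts).mean(dim=0)

# global_prompt = torch.zeros(16, 512)   # (prompt length, embedding dim), assumed shape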
Image and language modeling is of crucial importance for vision-language pre-training (VLP), which aims to learn multi-modal representations from large-scale paired image-text data. However, we observe that most existing VLP methods focus on modeling the interactions between image and text features while neglecting the information disparity between image and text, and thus suffer from focal bias. To address this problem, we propose a vision-language masked autoencoder framework (VLMAE). VLMAE employs visual generative learning, which facilitates the model in acquiring fine-grained and unbiased features. Unlike previous works, VLMAE pays attention to almost all the critical patches in an image, providing a more comprehensive understanding. Extensive experiments show that VLMAE achieves better performance on various vision-language downstream tasks, including visual question answering, image-text retrieval, and visual grounding, even with a 20% pre-training speedup.
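The generative visual objective builds on reconstructing masked image patches; a generic MAE-style masking step is sketched below as an assumed simplification, since VLMAE's own strategy of covering almost all critical patches is more involved.

import torch

def random_mask_patches(patch_tokens, mask_ratio=0.75):
    """Randomly keep a subset of patch tokens; the rest are reconstructed by a decoder."""
    b, n, d = patch_tokens.shape
    num_keep = int(n * (1 - mask_ratio))
    keep_idx = torch.rand(b, n, device=patch_tokens.device).argsort(dim=1)[:, :num_keep]
    kept = torch.gather(patch_tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return kept, keep_idx

# The kept tokens are encoded together with the text, and a decoder is trained to
# regenerate the masked patches, which encourages fine-grained visual features.
tokens = torch.randn(2, 196, 768)        # e.g., 14x14 patches with dimension 768
kept, keep_idx = random_mask_patches(tokens)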
Semi-supervised learning (SSL) improves model generalization by leveraging large amounts of unlabeled data to augment limited labeled samples. However, currently popular SSL evaluation protocols are often constrained to computer vision (CV) tasks. In addition, previous work typically trains deep neural networks from scratch, which is time-consuming and environmentally unfriendly. To address the above issues, we construct a Unified SSL Benchmark (USB) by selecting 15 diverse, challenging, and comprehensive tasks from CV, natural language processing (NLP), and audio processing (Audio), on which we systematically evaluate the dominant SSL methods, and also open-source a modular and extensible codebase for fair evaluation of these SSL methods. We further provide pre-trained versions of state-of-the-art neural models for CV tasks to make the cost of further tuning affordable. USB enables the evaluation of a single SSL algorithm on more tasks from multiple domains at a lower cost. Specifically, on a single NVIDIA V100, only 37 GPU days are required to evaluate FixMatch on the 15 tasks in USB, whereas 335 GPU days (279 GPU days on 4 CV datasets other than ImageNet) are needed for 5 CV tasks with the typical protocol.
In this paper, we present the Multi-Forgery Detection Challenge, held in conjunction with the IEEE Computer Society Workshop on Biometrics at CVPR 2022. Our Multi-Forgery Detection Challenge aims to detect automatic image manipulations, including but not limited to image editing, image synthesis, image generation, image Photoshop, etc. Our challenge attracted 674 teams from around the world, with about 2000 valid result submissions. We invited the Top 10 teams to present their solutions to the challenge, three of which were awarded prizes in the grand finale. In this paper, we present the solutions of the Top 3 teams in order to boost research in the field of image forgery detection.
In this paper, we introduce MCTensor, a PyTorch-based library providing general-purpose and high-precision arithmetic for DL training. MCTensor is used in the same way as a PyTorch Tensor: we implement multiple basic, matrix-level computation operators and NN modules for MCTensor with the same PyTorch interface. Our algorithms achieve high-precision computation while also benefiting from PyTorch's heavily optimized floating-point arithmetic. We evaluate MCTensor arithmetic against PyTorch native arithmetic on a series of tasks, where models using MCTensor in float16 match or outperform PyTorch models with float32 or float64 precision.
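The underlying idea of multi-component arithmetic, representing one high-precision value as a short list of machine floats, can be illustrated with the classical error-free Two-Sum transform; this is a generic sketch of the technique, not MCTensor's actual implementation.

import torch

def two_sum(a, b):
    """Error-free transform: a + b equals s + e exactly, with s the rounded sum and e the rounding error."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

# A two-component value (hi, lo) carries more precision than hi alone; a full
# library would also renormalize and extend this to longer component lists.
a = torch.tensor([1.0], dtype=torch.float16)
b = torch.tensor([1e-4], dtype=torch.float16)
hi, lo = two_sum(a, b)   # hi + lo represents a + b more precisely than hi alone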